Algorithms Are Redrawing the Space for Cultural Imagination

The age of the algorithm marks the moment when technical memory has evolved to store not just our data but far more sophisticated patterns of practice, from musical taste to our social graphs.
The more we invest ourselves in algorithms, the further we proceed down a path of collaboration.
By: Ed Finn

The story of the algorithm has always been a love affair. Like every romance, the attraction is based on both familiarity and foreignness — the recognition of ourselves in another as well as the mystery of that other. As technical systems, algorithms have always embodied fragments of ourselves: our memories, our structures of knowledge and belief, our ethical and philosophical foundations. They are mirrors for human intention and progress, reflecting back the explicit and tacit knowledge that we embed in them. At the same time, they provide the essential ingredient of mystery, operating according to the logics of the database, complexity, and algorithmic iteration, calculating choices in ways that are fundamentally alien to human understanding.

This article is adapted from Ed Finn’s book “What Algorithms Want.”

Humanity has always been driven by twinned desires: to have perfect knowledge of the universe and perfect knowledge of ourselves. On the surface, it may seem as if the apotheosis of the algorithm in contemporary culture has not brought us any closer to the consummation of these desires. The universal encyclopedia of facts and insights we have assembled to date is vast but inconsistent, full of contradictions, misinformation, and dangling references.

In many ways our collective self-understanding seems to have advanced very little despite decades of research in neuroscience, psychology, economics, and other relevant fields. The versions of ourselves we see online in databases and user profiles still seem like crude caricatures. We eagerly add to the list of “effectively computable” problems, inviting algorithms to trade stocks, drive cars, and suggest potential dates. But the promised salvation of algorithmic theology stubbornly remains in the distant future: the clunky, disjointed implementation of the computational layer on cultural life leaves much to be desired. Algorithms still get it wrong far too often to make a believable case for transcendent truth.
And yet the seduction remains. For every step that computational systems take in mastering a new field of practice, from understanding natural speech to composing music, humanity also takes a step to close the gap. We shape ourselves around the cultural reality of code, shoring up the facade of computation where it falls short and working feverishly to extend it, to complete the edifice of the ubiquitous algorithm. Some of these choices are simple, even pedantic, like adjusting our speech patterns to (we hope) make our statements easier for machines to understand. Others are far more subtle, like the temptation to organize one’s weekend for optimal selfie opportunities, or the hidden biases encoded in nominally objective code. We depend on computational systems for a growing share of the raw material of intellectual life, from books and news stories to the very basics, like vocabulary, ideas to share, and people to share them with. The more we invest ourselves in these culture machines, the further we proceed down a path of collaboration. More than collaboration: a kind of co-identity. We are coming to define who we are through digital practice because virtual spaces are becoming more real to us than visceral ones.
Take Google’s quest to build the “Star Trek” computer. The company has made tremendous strides in finding and indexing the world’s information. Many of us rely on Google not just for search and access but as a repository for all sorts of personal data, from emails and photographs to biometric measurements of our exercise and diet routines. The company’s success at storing and deploying these different forms of information does more than just move its own systems closer to the “Star Trek” goal — that is, a perfect map of shared, external knowledge — it also moves its users closer. Researchers have demonstrated that using Google changes our memory practices, a fact anyone over 30 can prove: just ask yourself how many phone numbers you remember now as opposed to before you owned your first cell phone. The same is true for composition using word processors, for the impact of social media on identity formation, and so on. These are small observations of the sea change, the many ways in which digital culture, memory, and identity are evolving as our core practices of reading, writing, conversation, and thinking become digital processes. Through brain plasticity and changing social norms, we are adapting ourselves to become more knowable for algorithmic machines.

In this way we are evolving, as media theorist N. Katherine Hayles and others have argued, in conjunction with our technical systems, slowly moving toward some consummation of the algorithmic love affair. The risk of disaster haunts us — the consummation might become a collision, an explosion of the kind we linger on in stories like the “Terminator” series and through institutions like the Future of Life Institute (funded by Elon Musk and others to avert a “Terminator”-style AI apocalypse). But there is a more optimistic vision as well, one where humans engage in productive collaboration with computational systems: the “Star Trek” future, or a more ambitiously AI-fueled society like the one in science fiction author Iain M. Banks’s Culture novels. Indeed, we spend so much time worrying about the rise of a renegade independent artificial intelligence that we rarely pause to consider the many ways in which we are already collaborating with autonomous systems of varied intelligence. This moves far beyond our reliance on digital address books, mail programs, or file archives: Google’s machine learning algorithms can now suggest appropriate responses to emails, and AlphaGo gives masters of that venerable game some of their most interesting matches.
Widening the scope further, we can begin to see how we are changing the fundamental terms of cognition and imagination. The age of the algorithm marks the moment when technical memory has evolved to store not just our data but far more sophisticated patterns of practice, from musical taste to our social graphs. In many cases we are already imagining in concert with our machines. Algorithmic systems curate the quest for knowledge, conversing with us and anticipating our interests and informational needs. They author with us, providing scaffolding, context, and occasionally direct material for everything from “House of Cards” to algorithmically vetted pop music. The horizon of imaginative possibility is increasingly determined by computational systems, which manufacture and curate the serendipity and informational flow that propels the lifecycle of ideas, of discourse, of art. In other words, the twin quests that have dominated the history of civilization are increasingly codeterminate with the desire embedded in “effective computability” to make everything computable.

The space of imagination exists in algorithmic context, and that which cannot be computed cannot be fully integrated into the broader fabric of culture as we live it now. We are grappling with the consequences of code through the many boundary cases of human experience and cultural work that trouble contemporary algorithmic culture. The role of the curator, the editor, and the critic is more important than ever, as we draw the lines of effective computability and struggle to learn and remember the things that our computational systems do not or cannot know.


Ed Finn is Founding Director of the Center for Science and the Imagination at Arizona State University, and the author of “What Algorithms Want,” from which this article is adapted.
